[TRTLLM-8084][feat] Enhance the overlap scheduler for two-model spec decoding #8706
base: main
Conversation
104b975 to 1cec683
/bot run --disable-fail-fast

PR_Github #23393 [ run ] triggered by Bot. Commit:
55a6c7a to 00e94d8
/bot run

PR_Github #23514 [ run ] triggered by Bot. Commit:
📝 Walkthrough

This pull request extends the speculative decoding pipeline to support overlap scheduling for draft model execution. It adds GPU-based draft token processing, pre-allocated CUDA buffers for draft state management, and propagates draft-token updates through the model engine and executor loops. New parameters track draft tokens and accepted states throughout the forward pass to enable efficient overlap-aware execution.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Executor
    participant ModelEngine
    participant DraftModel
    participant TargetModel
    rect rgb(200, 220, 255)
        Note over Executor: Overlap Draft Iteration
        Executor->>DraftModel: forward_draft_model(draft_batch, num_accepted_tokens_device)
        DraftModel->>DraftModel: process draft tokens via GPU buffers
        DraftModel-->>Executor: draft_outputs, draft_tensors
    end
    rect rgb(220, 240, 220)
        Note over Executor: Process & Accumulate Draft Tokens
        Executor->>Executor: _accept_draft_tokens(target_outputs, target_inputs)
        Executor->>Executor: populate draft_tokens_accumulator per-request
        Executor-->>Executor: return new_target_inputs
    end
    rect rgb(240, 220, 220)
        Note over Executor: Submit Target Model with Draft Tokens
        Executor->>TargetModel: forward(scheduled_requests, new_target_inputs)
        TargetModel->>TargetModel: execute with updated input_ids from draft
        TargetModel-->>Executor: target_outputs
    end
    rect rgb(255, 240, 200)
        Note over Executor: Cleanup & Next Iteration
        Executor->>Executor: cleanup_previous_draft_resources()
        Executor->>Executor: store current batch as previous_draft_batch
    end
```
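For orientation, here is a hedged Python sketch of one such iteration; the objects and method names mirror the diagram labels and are not the executor's exact API.

```python
def overlap_iteration(executor, drafter, target_engine, state):
    # Draft model runs using GPU-resident accepted-token counts (no host sync).
    draft_outputs, draft_tensors = drafter.forward_draft_model(
        state.draft_batch, state.num_accepted_tokens_device)
    # Accept draft tokens on device and assemble the next target inputs.
    new_target_inputs = executor._accept_draft_tokens(
        state.target_outputs, state.target_inputs)
    # Target model consumes input_ids that already include the draft tokens.
    state.target_outputs = target_engine.forward(
        state.scheduled_requests, new_target_inputs)
    # Release the previous draft batch and remember the current one.
    drafter.cleanup_previous_draft_resources()
    state.previous_draft_batch = state.draft_batch
```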
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Possibly related PRs
Suggested reviewers
Pre-merge checks and finishing touches

❌ Failed checks (1 inconclusive)
✅ Passed checks (1 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
tensorrt_llm/_torch/pyexecutor/sampler.py (2)
835-841: Fix NameError: draft_layer_id used before assignment.

The condition references `draft_layer_id` before it is defined, which will raise at runtime whenever `use_dynamic_tree` is True.
Apply this diff to define draft_layer_id before use and drop the later duplicate:
```diff
-        if (
-            spec_tree_manager.use_dynamic_tree
-            # FIXME: 'draft_layer_id' is undefined
-            and draft_layer_id == spec_tree_manager.max_draft_len - 1  # noqa: F821
-        ):
+        draft_layer_id = spec_tree_manager.cur_draft_layer_idx
+        if (
+            spec_tree_manager.use_dynamic_tree
+            and draft_layer_id == spec_tree_manager.max_draft_len - 1
+        ):
             # TODO: Re-sample the draft tokens for the last layer.
             raise NotImplementedError("Dynamic tree is not fully supported yet.")
@@
-        draft_layer_id = spec_tree_manager.cur_draft_layer_idx
```
760-768: Out-of-bounds indexing confirmed for dynamic tree speculative decoding.

The review comment correctly identifies a vulnerability. In `SpecTreeManager`, the dynamic tree path initializes `top_k_list` as a single tensor with shape `[dynamic_tree_max_topK]` where each element equals `dynamic_tree_max_topK`. When `sum()` is called, this yields `dynamic_tree_max_topK²`, which can easily exceed `max_total_draft_tokens + 1`. The loop then indexes `new_tokens_list[step]` with unbounded `step` values, causing out-of-bounds access in `add_token()` at line 301. The TODO comment on line 763 ("For the last layer of the dynamic tree, we need to resampling all the draft tokens") explicitly confirms this code path is incomplete for dynamic trees.

Recommendation: add bounds validation before the loop, either clamping `i` to `max_total_draft_tokens` or asserting `cur_layer_num_nodes <= max_total_draft_tokens + 1` with supporting invariant documentation.
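A minimal sketch of that guard, using only the names mentioned in this comment (`top_k_list`, `cur_layer_num_nodes`, `max_total_draft_tokens`); the actual sampler code may organize this differently.

```python
import torch


def checked_layer_node_count(top_k_list: torch.Tensor,
                             max_total_draft_tokens: int) -> int:
    """Bound the per-layer node count so new_tokens_list[step] can never be
    indexed past the allocated max_total_draft_tokens + 1 slots."""
    cur_layer_num_nodes = int(top_k_list.sum().item())
    assert cur_layer_num_nodes <= max_total_draft_tokens + 1, (
        f"dynamic tree expanded to {cur_layer_num_nodes} nodes, but only "
        f"{max_total_draft_tokens + 1} draft slots are allocated")
    return cur_layer_num_nodes
```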
🧹 Nitpick comments (4)
tensorrt_llm/_torch/pyexecutor/sampler.py (1)
860-873: Top‑K per node extraction: ensure shapes and devices match.

The per-node topk uses raw_logits[request_index * num_logits_per_request + i, :].indices. Verify that logits and request_index are on the same device, and that top_k_list is a CUDA tensor if raw_logits is CUDA. Otherwise, implicit device sync/copies may occur.
- Assert raw_logits.device == new_draft_tokens_cuda.device and move top_k_list to that device before use.
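A small sketch of that guard, wrapped as a helper for self-containment; tensor names follow the comment above and are illustrative rather than the exact sampler variables.

```python
import torch


def align_topk_device(raw_logits: torch.Tensor,
                      new_draft_tokens_cuda: torch.Tensor,
                      top_k_list: torch.Tensor) -> torch.Tensor:
    """Return top_k_list on raw_logits' device, asserting that the logits and
    the draft-token buffer already agree, so the per-node top-k loop triggers
    no implicit syncs or copies."""
    assert raw_logits.device == new_draft_tokens_cuda.device, (
        "raw_logits and new_draft_tokens_cuda must live on the same device")
    if top_k_list.device != raw_logits.device:
        top_k_list = top_k_list.to(raw_logits.device, non_blocking=True)
    return top_k_list
```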
tests/unittest/_torch/speculative/test_eagle3.py (2)
125-131: Conditional AR test gating matches overlap support.

Gating AR when the attn backend is not TRTLLM or the chain drafter is off is reasonable given the overlap constraints. Consider logging the skip reason to aid diagnosis in CI.
137-147: Acceptance-rate metric: handle zero drafted tokens defensively.

num_drafted += max_draft_len assumes every step drafts max_draft_len tokens. If the drafter shortens the tail, accept_rate can be inflated. Optionally, derive the drafted count from the actually buffered draft tokens, if available, to tighten the assertion.
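A hedged sketch of that tighter count. It assumes requests expose `py_draft_tokens` (referenced elsewhere in this review); the accepted-token counter shown here is a hypothetical attribute and the real test may need a different accessor.

```python
def compute_accept_rate(finished_requests) -> float:
    """Count drafted tokens from per-request buffers instead of assuming
    max_draft_len tokens were drafted on every step."""
    num_drafted = sum(
        len(getattr(req, "py_draft_tokens", None) or [])
        for req in finished_requests)
    num_accepted = sum(
        getattr(req, "py_num_accepted_draft_tokens", 0)  # hypothetical counter
        for req in finished_requests)
    return num_accepted / num_drafted if num_drafted else 0.0
```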
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)
2408-2451: _handle_speculative_decoding: clarify return semantics and state effects.
- The method both computes acceptance and triggers drafter overlap, and returns new_target_inputs for the next forward. Consider documenting that new_target_inputs is what ModelEngine.forward consumes.
- After calling generate_draft_tokens_with_overlap(..., num_accepted_tokens_device), you set has_previous_draft_tokens based on new_target_inputs.next_draft_tokens presence. Good. Ensure next_draft_tokens is indeed populated by the drafter before the following iteration; otherwise forward won’t overlap.
Add a brief docstring and an assert guarding the type of new_target_inputs when overlap is active.
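A rough shape of the suggested docstring; the body and exact signatures are guessed from the identifiers named in this review, not copied from the PR.

```python
def _handle_speculative_decoding(self, scheduled_batch, target_inputs,
                                 target_outputs):
    """Accept draft tokens on the GPU and launch the overlapped draft forward.

    Returns:
        new_target_inputs: the tensors ModelEngine.forward consumes on the
        next iteration; its next_draft_tokens field must be populated by the
        drafter for the following forward to overlap.
    """
    new_target_inputs, num_accepted_tokens_device = self._accept_draft_tokens(
        target_outputs, target_inputs)
    self.drafter.generate_draft_tokens_with_overlap(
        scheduled_batch, new_target_inputs, num_accepted_tokens_device)
    # An isinstance(new_target_inputs, SampleStateTensorsMTP) assert could go
    # here when the overlap scheduler is active.
    self.has_previous_draft_tokens = (
        new_target_inputs.next_draft_tokens is not None)
    return new_target_inputs
```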
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
tensorrt_llm/_torch/pyexecutor/model_engine.py (17 hunks)
tensorrt_llm/_torch/pyexecutor/py_executor.py (10 hunks)
tensorrt_llm/_torch/pyexecutor/sampler.py (1 hunks)
tensorrt_llm/_torch/speculative/model_drafter.py (26 hunks)
tensorrt_llm/scripts/install_tensorrt.sh (1 hunks)
tests/unittest/_torch/speculative/test_eagle3.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Use only spaces, no tabs; indent with 4 spaces.
Files:
tensorrt_llm/_torch/pyexecutor/sampler.py
tests/unittest/_torch/speculative/test_eagle3.py
tensorrt_llm/_torch/pyexecutor/py_executor.py
tensorrt_llm/_torch/speculative/model_drafter.py
tensorrt_llm/_torch/pyexecutor/model_engine.py
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.
Files:
tensorrt_llm/_torch/pyexecutor/sampler.py
tests/unittest/_torch/speculative/test_eagle3.py
tensorrt_llm/_torch/pyexecutor/py_executor.py
tensorrt_llm/_torch/speculative/model_drafter.py
tensorrt_llm/_torch/pyexecutor/model_engine.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).
Files:
tensorrt_llm/_torch/pyexecutor/sampler.py
tests/unittest/_torch/speculative/test_eagle3.py
tensorrt_llm/_torch/pyexecutor/py_executor.py
tensorrt_llm/_torch/speculative/model_drafter.py
tensorrt_llm/_torch/pyexecutor/model_engine.py
🧠 Learnings (12)
📚 Learning: 2025-08-28T10:25:22.370Z
Learnt from: ixlmar
Repo: NVIDIA/TensorRT-LLM PR: 7294
File: tensorrt_llm/_torch/pyexecutor/sampler.py:887-891
Timestamp: 2025-08-28T10:25:22.370Z
Learning: In tensorrt_llm/_torch/pyexecutor/sampler.py, the draft_probs and target_probs tensors have shapes [1, steps] not [steps, vocab_size] as might be expected, making the .squeeze(0) operations appropriate for removing the batch dimension of size 1.
Applied to files:
tensorrt_llm/_torch/pyexecutor/sampler.py
📚 Learning: 2025-08-28T10:22:02.288Z
Learnt from: ixlmar
Repo: NVIDIA/TensorRT-LLM PR: 7294
File: tensorrt_llm/_torch/pyexecutor/sampler.py:1191-1197
Timestamp: 2025-08-28T10:22:02.288Z
Learning: In tensorrt_llm/_torch/pyexecutor/sampler.py, the object identity comparison `softmax_req_indices is not group_req_indices_cuda` on line ~1191 is intentional and used as an optimization to determine whether to reuse an existing indexer or create a new one, based on which code path was taken during tensor assignment.
Applied to files:
tensorrt_llm/_torch/pyexecutor/sampler.py
📚 Learning: 2025-08-18T09:08:07.687Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 6984
File: cpp/tensorrt_llm/CMakeLists.txt:297-299
Timestamp: 2025-08-18T09:08:07.687Z
Learning: In the TensorRT-LLM project, artifacts are manually copied rather than installed via `cmake --install`, so INSTALL_RPATH properties are not needed - only BUILD_RPATH affects the final artifacts.
Applied to files:
tensorrt_llm/scripts/install_tensorrt.sh
📚 Learning: 2025-08-11T20:09:24.389Z
Learnt from: achartier
Repo: NVIDIA/TensorRT-LLM PR: 6763
File: tests/integration/defs/triton_server/conftest.py:16-22
Timestamp: 2025-08-11T20:09:24.389Z
Learning: In the TensorRT-LLM test infrastructure, the team prefers simple, direct solutions (like hard-coding directory traversal counts) over more complex but robust approaches when dealing with stable directory structures. They accept the maintenance cost of updating tests if the layout changes.
Applied to files:
tensorrt_llm/scripts/install_tensorrt.sh
📚 Learning: 2025-08-06T13:58:07.506Z
Learnt from: galagam
Repo: NVIDIA/TensorRT-LLM PR: 6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.
Applied to files:
tensorrt_llm/scripts/install_tensorrt.sh
📚 Learning: 2025-09-23T15:12:38.312Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/thop/allreduceOp.cpp:352-446
Timestamp: 2025-09-23T15:12:38.312Z
Learning: In TensorRT-LLM NCCL device implementation, NCCL version 2.28+ requirements are handled at runtime in the nccl_device/config layer rather than with compile-time guards. This allows the allreduceOp to remain version-agnostic and delegates version compatibility validation to the appropriate lower-level components that can gracefully handle unsupported configurations.
Applied to files:
tensorrt_llm/scripts/install_tensorrt.sh
📚 Learning: 2025-08-21T00:16:56.457Z
Learnt from: farshadghodsian
Repo: NVIDIA/TensorRT-LLM PR: 7101
File: docs/source/blogs/tech_blog/blog9_Deploying_GPT_OSS_on_TRTLLM.md:36-36
Timestamp: 2025-08-21T00:16:56.457Z
Learning: TensorRT-LLM container release tags in documentation should only reference published NGC container images. The README badge version may be ahead of the actual published container versions.
Applied to files:
tensorrt_llm/scripts/install_tensorrt.sh
📚 Learning: 2025-08-01T15:14:45.673Z
Learnt from: yibinl-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 6506
File: examples/models/core/mixtral/requirements.txt:3-3
Timestamp: 2025-08-01T15:14:45.673Z
Learning: In TensorRT-LLM, examples directory can have different dependency versions than the root requirements.txt file. Version conflicts between root and examples dependencies are acceptable because examples are designed to be standalone and self-contained.
Applied to files:
tensorrt_llm/scripts/install_tensorrt.sh
📚 Learning: 2025-08-19T12:45:11.997Z
Learnt from: amitz-nv
Repo: NVIDIA/TensorRT-LLM PR: 7033
File: tensorrt_llm/_torch/pyexecutor/model_engine.py:0-0
Timestamp: 2025-08-19T12:45:11.997Z
Learning: In tensorrt_llm/_torch/pyexecutor/model_engine.py, DoRA (Delta Orthogonal Rank Adaptation) functionality was removed from the PyTorch flow to eliminate issues with inverted DoRA detection logic. The original is_dora condition was checking if scaling_vec_pointer == 0, which was potentially incorrect.
Applied to files:
tensorrt_llm/_torch/pyexecutor/py_executor.py
tensorrt_llm/_torch/pyexecutor/model_engine.py
📚 Learning: 2025-08-14T21:04:50.248Z
Learnt from: thorjohnsen
Repo: NVIDIA/TensorRT-LLM PR: 6910
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-14T21:04:50.248Z
Learning: In KV cache onboarding logic during prefill in cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, when calculating which blocks fall within the attention window, use getTokensPerBlock() to advance token indices rather than block->getUniqueTokens().size(), because the calculation needs to consider the post-prefill state where blocks will be filled to capacity, not their current token count.
Applied to files:
tensorrt_llm/_torch/pyexecutor/py_executor.py
📚 Learning: 2025-08-15T06:46:53.813Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6767
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-15T06:46:53.813Z
Learning: In the TensorRT-LLM KV cache manager, SWA (Sliding Window Attention) combined with beam search is currently in a broken/non-functional state and is planned for future rework. During preparatory refactoring phases, code related to SWA+beam search may intentionally remain in a non-working state until the broader rework is completed.
Applied to files:
tensorrt_llm/_torch/pyexecutor/py_executor.py
📚 Learning: 2025-08-14T15:43:23.107Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: tensorrt_llm/_torch/attention_backend/trtllm.py:259-262
Timestamp: 2025-08-14T15:43:23.107Z
Learning: In TensorRT-LLM's attention backend, tensor parameters in the plan() method are assigned directly without validation (dtype, device, contiguity checks). This maintains consistency across all tensor inputs and follows the pattern of trusting callers to provide correctly formatted tensors.
Applied to files:
tensorrt_llm/_torch/pyexecutor/py_executor.py
🧬 Code graph analysis (5)
tensorrt_llm/_torch/pyexecutor/sampler.py (1)
tensorrt_llm/_torch/pyexecutor/llm_request.py (1)
get_draft_token_length(777-788)
tests/unittest/_torch/speculative/test_eagle3.py (1)
tests/integration/defs/accuracy/test_disaggregated_serving.py (2)
generate_async(277-283)
outputs(38-39)
tensorrt_llm/_torch/pyexecutor/py_executor.py (3)
tensorrt_llm/_torch/attention_backend/trtllm.py (1)
TrtllmAttention(1150-1590)
tensorrt_llm/_torch/speculative/model_drafter.py (2)
ModelDrafter(52-948)
generate_draft_tokens_with_overlap(774-880)
tensorrt_llm/_torch/speculative/mtp.py (1)
SampleStateTensorsMTP(26-28)
tensorrt_llm/_torch/speculative/model_drafter.py (7)
tensorrt_llm/_torch/pyexecutor/llm_request.py (1)
LlmRequest(422-631)
tensorrt_llm/_torch/pyexecutor/scheduler.py (3)
ScheduledRequests(20-41)
all_requests(40-41)
batch_size(37-38)
tensorrt_llm/_torch/pyexecutor/sampler.py (7)
SampleStateTensors(83-88)
SampleState(92-98)
update_requests(121-126)
update_requests(161-172)
update_requests(212-231)
update_requests(990-1032)
update_requests(1862-1880)
tensorrt_llm/_torch/pyexecutor/model_engine.py (2)
forward(77-84)
forward(2502-2604)
tensorrt_llm/_torch/speculative/mtp.py (4)
forward(341-538)
forward(1172-1360)
SampleStateTensorsMTP(26-28)
update_requests(251-280)
tensorrt_llm/_torch/speculative/drafting_loops.py (1)
forward(119-154)
tensorrt_llm/_torch/speculative/drafter.py (1)
pad_draft_tokens_for_cuda_graph(57-69)
tensorrt_llm/_torch/pyexecutor/model_engine.py (6)
tensorrt_llm/_torch/pyexecutor/llm_request.py (4)
LlmRequest(422-631)
get_draft_token_length(777-788)
append(101-127)
append(192-209)
tensorrt_llm/_torch/speculative/eagle3.py (2)
forward(325-447)
Eagle3ResourceManager(23-107)
tensorrt_llm/_torch/pyexecutor/scheduler.py (2)
ScheduledRequests(20-41)
batch_size(37-38)
tensorrt_llm/_torch/pyexecutor/resource_manager.py (2)
ResourceManager(1287-1330)
get_resource_manager(1299-1300)
tensorrt_llm/_torch/pyexecutor/sampler.py (1)
SampleStateTensors(83-88)
tensorrt_llm/_torch/speculative/utils.py (1)
SpecDecodingTensor(253-264)
🪛 Ruff (0.14.3)
tensorrt_llm/_torch/speculative/model_drafter.py
897-897: Avoid specifying long messages outside the exception class
(TRY003)
900-900: Avoid specifying long messages outside the exception class
(TRY003)
🪛 Shellcheck (0.11.0)
tensorrt_llm/scripts/install_tensorrt.sh
[warning] 36-36: Quote the right-hand side of != in [[ ]] to prevent glob matching.
(SC2053)
[warning] 86-86: ARCH2 appears unused. Verify use (or export if used externally).
(SC2034)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (5)
tensorrt_llm/_torch/pyexecutor/sampler.py (2)
280-283: Grouping condition aligns with draft-token presence.

Switching to get_draft_token_length(req) > 0 correctly reflects the overlap path, where py_draft_logits may be unset. Looks good.
784-825: Code logic is correct; the review concern is unfounded.

The test file `tests/unittest/_torch/speculative/test_draft_token_tree_verification.py` validates this logic with concrete examples. Test cases show:

- 2 draft tokens accepted (test case 2): `cumprod().sum()` = 2 → `+1` = 3 → loop 3 times → return `3-1` = 2 ✓
- 3 draft tokens accepted (test case 3): `cumprod().sum()` = 3 → `+1` = 4 → loop 4 times → return `4-1` = 3 ✓

The `+1` includes the root node in tree traversal (for `add_token` calls), while the `-1` excludes it from the returned count of accepted draft tokens. This pairing is intentional and correct: the net effect returns the count of accepted draft tokens (excluding the implicit root). The tests pass as expected, confirming the off-by-one concern is incorrect.
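A tiny worked example of that bookkeeping (standalone arithmetic, not taken from the test file):

```python
import torch

matches = torch.tensor([1, 1, 0, 1])  # did draft token i match the target?
accepted = int(torch.cumprod(matches, dim=0).sum())  # 2: counting stops at the first miss
loop_iterations = accepted + 1        # 3 add_token calls, including the implicit root
returned_count = loop_iterations - 1  # 2 accepted draft tokens, root excluded
```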
tests/unittest/_torch/speculative/test_eagle3.py (1)
525-526: CDL sampling uses temperature=1.0, top_p=0.9.

The intentional randomness increase looks fine and should avoid degenerate greedy paths.
tensorrt_llm/_torch/pyexecutor/py_executor.py (2)
276-294: Overlap scheduler gating: good guardrails with a clear fallback.

The overlap_not_supported check and fallback to the non-overlap loop are sensible. The warning message explains the constraint.
- Ensure draft_model_engine.attn_backend is a class (not instance); issubclass(...) will raise otherwise. If it can be an instance, switch to isinstance(..., TrtllmAttention).
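A sketch of a check that tolerates both cases (class or instance); whether an instance can actually appear here still needs to be confirmed, and the surrounding names are taken from the quoted PR code.

```python
backend = self.draft_model_engine.attn_backend
is_trtllm_backend = (issubclass(backend, TrtllmAttention) if isinstance(
    backend, type) else isinstance(backend, TrtllmAttention))
overlap_not_supported = (self.drafter is not None
                         and isinstance(self.drafter, ModelDrafter)
                         and (not self.drafter.use_static_draft_loop
                              or not is_trtllm_backend))
```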
1066-1071: Prepare draft requests early: aligns scheduler/KV manager state.

Always initializing py_draft_tokens/pages before scheduling avoids mismatches. Good change.
```python
                self.draft_first_draft_indices_cuda[:total_tokens].copy_(
                    idx_tensor_cpu, non_blocking=True)
                self.draft_first_draft_seq_slots_cuda[:total_tokens].copy_(
                    seq_slots_tensor_cpu, non_blocking=True)

                # Create token position indices (repeating 0..tokens_per_request for each request)
                token_positions = torch.arange(
                    tokens_per_request, dtype=torch.long,
                    device='cuda').repeat(num_requests)

                self.input_ids_cuda[
                    self.
                    draft_first_draft_indices_cuda[:total_tokens]] = new_tensors_device.new_tokens[
                        token_positions, self.
                        draft_first_draft_seq_slots_cuda[:total_tokens], 0]

        if num_draft_tokens > 0:
            draft_tokens = torch.tensor(draft_tokens,
                                        dtype=torch.int,
```
Fix per-token offsets when mixing first-draft and generation slices.
first_draft_input_ids_positions can contain both first-draft requests (length original_max_draft_len + 1) and single-token generation updates. The current torch.arange(...).repeat(num_requests) assumes every entry spans the same number of tokens, so as soon as both kinds co-exist the index tensor is longer than total_tokens. At runtime this yields shape mismatches and we end up writing past the intended slices, breaking GPU-side draft updates. Compute offsets per slice instead and copy them into a pre-allocated buffer alongside the other draft buffers, e.g.:
```diff
@@
         self.draft_first_draft_indices_cuda[:total_tokens].copy_(
             idx_tensor_cpu, non_blocking=True)
         self.draft_first_draft_seq_slots_cuda[:total_tokens].copy_(
             seq_slots_tensor_cpu, non_blocking=True)
-
-        # Create token position indices (repeating 0..tokens_per_request for each request)
-        token_positions = torch.arange(
-            tokens_per_request, dtype=torch.long,
-            device='cuda').repeat(num_requests)
-
-        self.input_ids_cuda[
-            self.
-            draft_first_draft_indices_cuda[:total_tokens]] = new_tensors_device.new_tokens[
-                token_positions, self.
-                draft_first_draft_seq_slots_cuda[:total_tokens], 0]
+        token_offsets_cpu = torch.tensor([
+            offset
+            for start_idx, end_idx, _ in first_draft_input_ids_positions
+            for offset in range(end_idx - start_idx)
+        ],
+                                         dtype=torch.long,
+                                         pin_memory=True)
+        self.draft_first_draft_token_positions_cuda[:total_tokens].copy_(
+            token_offsets_cpu, non_blocking=True)
+        self.input_ids_cuda[self.draft_first_draft_indices_cuda[:total_tokens]] = (
+            new_tensors_device.new_tokens[
+                self.draft_first_draft_token_positions_cuda[:total_tokens],
+                self.draft_first_draft_seq_slots_cuda[:total_tokens], 0])
```

and initialize self.draft_first_draft_token_positions_cuda in the draft-model buffer block with size max_first_draft_tokens. This keeps shapes aligned and prevents the crash.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In tensorrt_llm/_torch/pyexecutor/model_engine.py around lines 1688 to 1706, the
code uses torch.arange(...).repeat(num_requests) which assumes each draft slice
has the same token count and therefore produces an index tensor longer than
total_tokens when mixing full first-draft slices and single-token generation
updates; replace this with per-slice offset computation: for each slice compute
its token positions as arange(slice_len) and write those positions into a
pre-allocated CUDA buffer (e.g. self.draft_first_draft_token_positions_cuda) at
the same indices used for draft_first_draft_indices_cuda/seq_slots_cuda, ensure
the buffer is initialized in the draft-model buffer block to
max_first_draft_tokens, and when indexing self.input_ids_cuda use only the first
total_tokens entries from that per-slice positions buffer so shapes always align
and no out-of-range writes occur.
```python
                if req.py_request_id not in self.draft_tokens_accumulator:
                    self.draft_tokens_accumulator[
                        req.py_request_id] = [0] * self.max_total_draft_tokens
                self.draft_tokens_accumulator[req.py_request_id][
                    draft_position - 1] = req.get_last_tokens(0)
                target_model_req.py_draft_logits = req.py_result.generation_logits  # forwards Nones
                if req.state != LlmRequestState.GENERATION_COMPLETE and draft_position < self.max_draft_len:

                # Check against the accumulator length instead
                accumulated_tokens_count = len(
                    self.draft_tokens_accumulator[req.py_request_id])
                if req.state != LlmRequestState.GENERATION_COMPLETE and accumulated_tokens_count < target_model_req.py_draft_pages_allocated:
                    new_requests.append(req)
                else:
                    self.draft_seq_slot_manager.free_resources(req)
                    if cleanup_resources:
                        self.draft_seq_slot_manager.free_resources(req)
```
Let drafting continue until all allocated slots are filled.
accumulated_tokens_count = len(self.draft_tokens_accumulator[req_id]) always equals self.max_total_draft_tokens, so after the very first iteration the < py_draft_pages_allocated test fails even though we have only written a single token. That empties draft_batch.generation_requests, halting drafting prematurely. Track how many slots we have actually produced instead (the current draft_position already carries that info) and keep requests alive until we reach py_draft_pages_allocated, e.g.:
```diff
-                if req.py_request_id not in self.draft_tokens_accumulator:
-                    self.draft_tokens_accumulator[
-                        req.py_request_id] = [0] * self.max_total_draft_tokens
-                self.draft_tokens_accumulator[req.py_request_id][
-                    draft_position - 1] = req.get_last_tokens(0)
+                if req.py_request_id not in self.draft_tokens_accumulator:
+                    self.draft_tokens_accumulator[
+                        req.py_request_id] = [0] * self.max_total_draft_tokens
+                self.draft_tokens_accumulator[req.py_request_id][draft_position -
+                                                                 1] = req.get_last_tokens(
+                                                                     0)
@@
-                accumulated_tokens_count = len(
-                    self.draft_tokens_accumulator[req.py_request_id])
-                if req.state != LlmRequestState.GENERATION_COMPLETE and accumulated_tokens_count < target_model_req.py_draft_pages_allocated:
+                slots_filled = min(draft_position,
+                                   target_model_req.py_draft_pages_allocated)
+                if (req.state != LlmRequestState.GENERATION_COMPLETE
+                        and slots_filled < target_model_req.py_draft_pages_allocated):
                     new_requests.append(req)
                 else:
                     if cleanup_resources:
                         self.draft_seq_slot_manager.free_resources(req)
```

This keeps the drafting loop alive until each request consumes its allocated pages (or finishes) and fixes the early termination.
🤖 Prompt for AI Agents
In tensorrt_llm/_torch/speculative/model_drafter.py around lines 436 to 451, the
code uses accumulated_tokens_count =
len(self.draft_tokens_accumulator[req.py_request_id]) which always equals
self.max_total_draft_tokens and causes drafting to stop prematurely; replace
that length-based check with an actual count of produced draft slots (use the
existing draft_position value or count non-empty entries for the request) and
compare that produced_count against target_model_req.py_draft_pages_allocated so
requests stay in new_requests until they have produced their allocated pages
(leave the cleanup_resources free_resources call as-is when finishing).
```sh
ARCH=$(uname -m)
if [ "$ARCH" = "x86_64" ];then ARCH1="x86_64" && ARCH2="x64" && ARCH3=$ARCH1;fi
if [ "$ARCH" = "aarch64" ];then ARCH1="aarch64" && ARCH2="aarch64sbsa" && ARCH3="sbsa";fi
```
Remove unused variable ARCH2.
ARCH2 is assigned on line 86 but never used in subsequent commands (only ARCH1 and ARCH3 appear in lines 90–113).
```diff
-if [ "$ARCH" = "aarch64" ];then ARCH1="aarch64" && ARCH2="aarch64sbsa" && ARCH3="sbsa";fi
+if [ "$ARCH" = "aarch64" ];then ARCH1="aarch64" && ARCH3="sbsa";fi
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-if [ "$ARCH" = "aarch64" ];then ARCH1="aarch64" && ARCH2="aarch64sbsa" && ARCH3="sbsa";fi
+if [ "$ARCH" = "aarch64" ];then ARCH1="aarch64" && ARCH3="sbsa";fi
```
🧰 Tools
🪛 Shellcheck (0.11.0)
[warning] 86-86: ARCH2 appears unused. Verify use (or export if used externally).
(SC2034)
🤖 Prompt for AI Agents
In tensorrt_llm/scripts/install_tensorrt.sh around line 86, the conditional
assigns ARCH2 but that variable is never referenced later; remove the unused
ARCH2 assignment from the line (leave ARCH1 and ARCH3 assignment logic intact)
so the script only sets ARCH1 and ARCH3 when ARCH equals "aarch64".
```sh
mv /usr/local/TensorRT-${TRT_VER} /usr/local/tensorrt
pip3 install --no-cache-dir /usr/local/tensorrt/python/tensorrt-*-cp${PARSED_PY_VERSION}-*.whl
rm -rf /tmp/TensorRT.tar
echo 'export LD_LIBRARY_PATH=/usr/local/tensorrt/lib:$LD_LIBRARY_PATH' >> "${ENV}"
```
Undefined variable ${ENV} will cause incorrect LD_LIBRARY_PATH configuration.
The ${ENV} variable is used without being defined or passed as a parameter. This will append to an empty string, resulting in incorrect or missing LD_LIBRARY_PATH updates across sessions.
Clarify the intent and provide a fix. If the goal is to persist LD_LIBRARY_PATH in the user's shell profile, consider:
```diff
-echo 'export LD_LIBRARY_PATH=/usr/local/tensorrt/lib:$LD_LIBRARY_PATH' >> "${ENV}"
+echo 'export LD_LIBRARY_PATH=/usr/local/tensorrt/lib:$LD_LIBRARY_PATH' >> /etc/profile.d/tensorrt.sh
```

Or accept `${ENV}` as a parameter:
```diff
+install_tensorrt() {
+    if [ -z "${ENV}" ]; then
+        echo "Error: ENV variable not set" >&2
+        return 1
+    fi
     PY_VERSION=$(python3 -c 'import sys; print(".".join(map(str, sys.version_info[0:2])))')
     ...
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-echo 'export LD_LIBRARY_PATH=/usr/local/tensorrt/lib:$LD_LIBRARY_PATH' >> "${ENV}"
+echo 'export LD_LIBRARY_PATH=/usr/local/tensorrt/lib:$LD_LIBRARY_PATH' >> /etc/profile.d/tensorrt.sh
```
🤖 Prompt for AI Agents
In tensorrt_llm/scripts/install_tensorrt.sh around line 140, the script appends
LD_LIBRARY_PATH to an undefined ${ENV} variable which will write to an empty
path; either explicitly target a well-known profile (for example ~/.bashrc or
create /etc/profile.d/tensorrt.sh) and append the export line there, or accept
and validate an ENV parameter (set a default if empty), failing early with a
clear error if not provided; ensure you quote and append the export line to the
chosen file only after confirming the path exists and the variable is non-empty.
PR_Github #23514 [ run ] completed with state
5920ecb to 589164a
/bot run

PR_Github #23566 [ run ] triggered by Bot. Commit:

PR_Github #23566 [ run ] completed with state
…ding Signed-off-by: ziyixiong-nv <[email protected]>
589164a to f8b4ffb
/bot run

PR_Github #23684 [ run ] triggered by Bot. Commit:

PR_Github #23684 [ run ] completed with state
/bot run

PR_Github #23778 [ run ] triggered by Bot. Commit:
```python
        # so it would fallback to the non-overlap scheduler.
        if not disable_overlap_scheduler and (attn_backend != "TRTLLM"
                                              or not use_chain_drafter):
            run_ar_test = False
```
The comment is correct but why do we need to skip the AR test here?
```python
        overlap_not_supported = self.drafter is not None and isinstance(
            self.drafter, ModelDrafter) and (
                not self.drafter.use_static_draft_loop or not issubclass(
                    self.draft_model_engine.attn_backend, TrtllmAttention))
```
Should we put this logic here instead? https://github.com/NVIDIA/TensorRT-LLM/blob/main/tensorrt_llm/_torch/pyexecutor/py_executor_creator.py#L284
The linked code will also disable overlap for algorithms that fundamentally don't support it, like ngram.
```python
        num_accepted_tokens = torch.zeros(batch_size,
                                          dtype=torch.int32,
                                          device=device)
        # Handle case where there are no draft tokens
```
The comment contradicts the next line.
```python
            draft_tokens_gen = draft_tokens[
                num_contexts:, :].int()  # [num_gens, max_draft_len]
            num_accepted_tokens[num_contexts:] += torch.cumprod(
                (draft_tokens_gen == gen_target_tokens[:, :max_draft_len]
```
I guess the rejection sampler is still TBD? What happens if you try to use non-greedy sampling with the overlap scheduler on? The chain drafter should work with non-greedy sampling nowadays.
PR_Github #23778 [ run ] completed with state
We will also want to re-enable the overlap scheduler in most of the unit tests in

I'm not 100% sure this is related to the overlap scheduler. It seems to be a weird interaction with dynamic spec decode. Speculation gets turned off in my benchmarks because the max seq len for the llama 3 eagle head is only 2048.
Summary by CodeRabbit
Release Notes
New Features
Improvements
Description
The PR is used to enhance the overlap scheduler for two-model spec decoding.
In the previous implementation, we needed to validate the draft tokens on the host, so we had to run the target model's update_requests before forwarding the draft model.
In this PR, we validate the draft tokens on the GPU and then forward the draft model directly based on the validation results.
The new workflow is shown below.

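As a minimal sketch of what GPU-side validation looks like under greedy acceptance, assuming the tensors are laid out as [batch, max_draft_len]; names are illustrative rather than the exact ones used in this PR.

```python
import torch


def count_accepted_draft_tokens(draft_tokens: torch.Tensor,
                                target_tokens: torch.Tensor) -> torch.Tensor:
    """Both inputs live on the GPU with shape [batch, max_draft_len].
    Returns per-request accepted counts without syncing to the host."""
    matches = (draft_tokens == target_tokens).int()
    # cumprod zeroes everything after the first mismatch, so the row sum is
    # the length of the accepted prefix.
    return torch.cumprod(matches, dim=1).sum(dim=1)
```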
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
`/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...`

Provide a user-friendly way for developers to interact with a Jenkins server.
Run `/bot [-h|--help]` to print this help message. See details below for each supported subcommand.
`run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]`

Launch build/test pipelines. All previously running jobs will be killed.
- `--reuse-test (optional)pipeline-id` (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- `--disable-reuse-test` (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.
- `--disable-fail-fast` (OPTIONAL): Disable fail fast on build/tests/infra failures.
- `--skip-test` (OPTIONAL): Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- `--stage-list "A10-PyTorch-1, xxx"` (OPTIONAL): Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- `--gpu-type "A30, H100_PCIe"` (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- `--test-backend "pytorch, cpp"` (OPTIONAL): Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- `--only-multi-gpu-test` (OPTIONAL): Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--disable-multi-gpu-test` (OPTIONAL): Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--add-multi-gpu-test` (OPTIONAL): Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
- `--post-merge` (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- `--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"` (OPTIONAL): Run the ordinary L0 pre-merge pipeline and the specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- `--detailed-log` (OPTIONAL): Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- `--debug` (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the `stage-list` parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see `docs/source/reference/ci-overview.md` and the `scripts/test_to_stage_mapping.py` helper.
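For example, a typical invocation combining the flags documented above can be posted as a PR comment:

```
/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast
```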
kill

`kill`

Kill all running builds associated with the pull request.
skip
`skip --comment COMMENT`

Skip testing for the latest commit on the pull request.
`--comment "Reason for skipping build/test"` is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline
`reuse-pipeline`

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.